Random Utility Agents

ABMs often have fishermen making discrete choices through random utility models (think Steve’s paper, or Gao and Hailu, which as far as I know is the most cited fishery ABM). The basic idea is that the agent’s utility from fishing at location \(i\) is: \[ U_i = \beta_i X_i + \epsilon_i \] where \(X_i\) is a vector of factors influencing the decision (say distance, risk, habit and so on).

If you assume that the errors \(\epsilon_i\) are i.i.d. Gumbel distributed, then the agent’s decision problem collapses to choosing each possible fishing spot \(i\) with probability: \[ \Pr(i) = \frac{e^{\beta_i X_i}}{\sum_j e^{\beta_j X_j}} \] which is handy because it’s basically the softmax bandit we already coded, except using \(\beta X\) instead of memorized profits.
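As a minimal sketch of that collapse, here is how the choice rule can be coded once the Gumbel errors are integrated out analytically (the function names `choice_probabilities` and `choose_spot` are mine, not from the original model):

```python
import numpy as np

def choice_probabilities(utilities):
    """Softmax over the deterministic utilities beta*X.
    The Gumbel errors are integrated out analytically,
    so we never have to sample them."""
    u = np.asarray(utilities, dtype=float)
    u = u - u.max()           # subtract the max for numerical stability
    expu = np.exp(u)
    return expu / expu.sum()

def choose_spot(rng, utilities):
    """Draw one fishing-spot index with the logit probabilities."""
    p = choice_probabilities(utilities)
    return rng.choice(len(p), p=p)
```

Subtracting the maximum before exponentiating changes nothing mathematically but keeps `np.exp` from overflowing when \(\beta X\) gets large.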

Now imagine that the agent is all-knowing and profit-maximizing, but still uses random utility (maybe a trembling hand, maybe some real white noise in selection), so that the utility of going to location \(i\) is just: \[ U_i = \beta \Pi_i + \epsilon_i \] that is, a multiple of the profits \(\Pi_i\) at location \(i\).

Here \(\beta\) measures how much profitability matters relative to the error. The lower \(\beta\), the more randomly the agent behaves; the higher \(\beta\), the closer it gets to pure profit maximization.
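A quick numerical illustration of that scaling, with made-up profits at three spots (the numbers and the helper `logit_probs` are purely for illustration):

```python
import numpy as np

def logit_probs(beta, profits):
    # Pr(i) proportional to exp(beta * Pi_i):
    # beta scales the profit signal against the Gumbel noise
    v = beta * np.asarray(profits, dtype=float)
    v -= v.max()              # numerical stability
    e = np.exp(v)
    return e / e.sum()

profits = [10.0, 9.0, 8.0]          # hypothetical profits at three spots
low = logit_probs(0.1, profits)     # noise dominates: close to uniform
high = logit_probs(5.0, profits)    # profit dominates: mass piles on the best spot
```

With \(\beta = 0.1\) the three probabilities are all near one third; with \(\beta = 5\) over 99% of the mass sits on the most profitable spot, even though the profit gaps are unchanged.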

So, for example, imagine a pseudo-biology where the fish is initially very close to port but moves further away over the year (fishing doesn’t kill fish here, to keep things super easy):

If agents have \(\beta = 100\), they will target only the very best location throughout the year.

If instead they have \(\beta = 1\), there is quite a lot more randomness in people’s choices. This isn’t immediately visible in the first few seconds, because fishing near port is initially so much more profitable than anywhere else, but once the fish starts moving out you see the fishers spreading out too.
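The contrast can be sketched with a toy season: a fish school drifting offshore, profits falling with distance from the school and with steaming cost from port, and an agent choosing spots by the logit rule. All the numbers here (profit function, drift speed, ten spots) are invented for illustration, not taken from the actual model:

```python
import numpy as np

def run_season(beta, days=100, n_spots=10, seed=0):
    """Toy season: the school starts at the port (spot 0) and drifts
    offshore; returns the share of days on which the agent picked the
    spot with the highest profit that day."""
    rng = np.random.default_rng(seed)
    best_days = 0
    for day in range(days):
        school = int(day / days * (n_spots - 1))  # fish drifting offshore
        spots = np.arange(n_spots)
        # made-up profit: falls with distance from school and from port
        profits = 10.0 - np.abs(spots - school) - 0.5 * spots
        v = beta * profits
        v -= v.max()                              # numerical stability
        p = np.exp(v) / np.exp(v).sum()
        choice = rng.choice(n_spots, p=p)
        best_days += (choice == profits.argmax())
    return best_days / days
```

Under these assumptions `run_season(100)` comes out at essentially 1 (the agent tracks the moving optimum day by day), while `run_season(1)` is far lower: the agent still leans toward good spots but scatters around them.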

Why bother?

The idea here is to test how good indirect inference is at fitting our model to logbook data. We can use these random utility agents to generate synthetic data to fit against.
The things I care about are how much indirect inference is influenced by:

  • Misspecified models
  • Agent heterogeneity
  • Low number of observations
  • Policy shocks

All the coding items are in place. Now all I need to do is find some free time and get to work.